Draw and Tell: Multimodal Descriptions Outperform Verbal- or Sketch-Only Descriptions in an Image Retrieval Task
Authors
Abstract
While language conveys meaning largely symbolically, actual communication acts typically contain iconic elements as well: People gesture while they speak, or may even draw sketches while explaining something. Image retrieval prima facie seems like a task that could profit from combined symbolic and iconic reference, but it is typically set up to work either from language only, or via (iconic) sketches with no verbal contribution. Using a model of grounded language semantics and a model of sketch-to-image mapping, we show that adding even very reduced iconic information to a verbal image description improves recall. Verbal descriptions paired with fully detailed sketches still perform better than these sketches alone. We see these results as supporting the assumption that natural user interfaces should respond to multimodal input, where possible, rather than just language alone.
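The abstract does not specify how the verbal and sketch signals are combined, so the following is only a minimal illustrative sketch, assuming a simple late-fusion setup with hypothetical placeholder score matrices standing in for a grounded-language scoring model and a sketch-to-image model; it shows how fused rankings could be compared to single-modality rankings via recall@k.

```python
# Illustrative late-fusion retrieval sketch (hypothetical scores, not the
# paper's actual models): rank candidate images by a weighted combination of
# a verbal-description score and a sketch-similarity score, then measure
# recall@k over a set of queries.
import numpy as np

rng = np.random.default_rng(0)
n_queries, n_images = 50, 200

# Placeholder score matrices standing in for a grounded-language model
# (description -> image scores) and a sketch-to-image model (sketch -> image scores).
text_scores = rng.random((n_queries, n_images))
sketch_scores = rng.random((n_queries, n_images))
targets = rng.integers(0, n_images, size=n_queries)  # gold image per query

def recall_at_k(scores: np.ndarray, targets: np.ndarray, k: int = 10) -> float:
    """Fraction of queries whose gold image appears in the top-k ranking."""
    top_k = np.argsort(-scores, axis=1)[:, :k]
    return float(np.mean([t in row for t, row in zip(targets, top_k)]))

alpha = 0.5  # fusion weight; a tunable assumption, not a value from the paper
fused = alpha * text_scores + (1 - alpha) * sketch_scores

for name, scores in [("text only", text_scores),
                     ("sketch only", sketch_scores),
                     ("fused", fused)]:
    print(f"{name:11s} recall@10 = {recall_at_k(scores, targets):.3f}")
```

With real model scores in place of the random placeholders, the fused ranking is where the paper's reported recall improvement over verbal-only or sketch-only retrieval would show up.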
Similar resources
The or That: Definite and Demonstrative Descriptions in Second Language Acquisition
Since Huebner's (1985) pioneering study, there have been many studies on the (mis)use or non-use of articles by L2 learners from article-less and article languages. The present study investigated how Persian L2 learners of English produce and interpret English definite descriptions and demonstrative descriptions. It was assumed that definite and demonstrative descriptions share the same central sema...
Disambiguating Visual Verbs
In this article, we introduce a new task, visual sense disambiguation for verbs: given an image and a verb, assign the correct sense of the verb, i.e., the one that describes the action depicted in the image. Just as textual word sense disambiguation is useful for a wide range of NLP tasks, visual sense disambiguation can be useful for multimodal tasks such as image retrieval, image description...
Saliency Cognition of Urban Monuments Based on Verbal Descriptions of Mental-Spatial Representations (Case Study: Urban Monuments in Qazvin)
Urban monuments encompass a wide range of architectural works either intentionally or unintentionally. These works are often salient due to their inherently explicit or hidden components and qualities in the urban context. Therefore, they affect the mental-spatial representations of the environment and make the city legible. However, the ambiguity of effective components often complicates their...
SHEF-Multimodal: Grounding Machine Translation on Images
This paper describes the University of Sheffield's submission for the WMT16 Multimodal Machine Translation shared task, where we participated in Task 1 to develop German-to-English and English-to-German statistical machine translation (SMT) systems in the domain of image descriptions. Our proposed systems are standard phrase-based SMT systems based on the Moses decoder, trained only on the provi...